A convergence proof of the split Bregman method for regularized least-squares problems
Authors
Abstract
The split Bregman (SB) method [14] is a fast splitting-based algorithm that solves image reconstruction problems with general l1 regularizations, e.g., total-variation (TV) and compressed sensing (CS), by introducing a single variable split to decouple the data-fitting term from the regularization term, yielding simple subproblems that are separable (or partially separable) and easy to minimize. Several convergence proofs have been proposed [2, 9, 20], but these proofs either impose a "full column rank" assumption on the split or assume exact updates in all subproblems. However, these assumptions are impractical in many applications such as parallel magnetic resonance (MR) and X-ray computed tomography (CT) image reconstruction [3, 4, 14, 19], where the inner least-squares problem usually cannot be solved efficiently due to the highly shift-variant Hessian. In this paper, we show that when the data-fitting term is quadratic, e.g., in image restoration problems with Gaussian noise, the SB method is a convergent alternating direction method of multipliers (ADMM) [1, 8, 12, 13], and we give a straightforward convergence proof with inexact updates using [8, Theorem 8]. Furthermore, since the SB method is a special case of an ADMM algorithm, the ADMM algorithm is likely to be faster than the SB method if the augmented Lagrangian (AL) penalty parameters are selected appropriately. As a concrete example, we analyze the convergence rate of the ADMM algorithm with two split variables (the SB method is a special case of the two-split ADMM algorithm) for image restoration problems with quadratic data-fitting and regularization terms. Our analysis shows that the two-split ADMM algorithm can be faster than the SB method if the AL penalty parameter of the SB method is suboptimal. Numerical experiments were conducted to verify our analysis.
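To make the single-split structure concrete, the following is a minimal sketch (not taken from the paper) of ADMM with one split variable z = x applied to an l1-regularized least-squares problem min_x 0.5||Ax - b||^2 + lam||x||_1. The function and variable names are illustrative assumptions; the quadratic x-subproblem is solved exactly here, whereas the paper's setting allows inexact updates.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (the "shrinkage" step).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1_ls(A, b, lam, rho=1.0, n_iter=200):
    """Illustrative ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    using the single split z = x, as in the split Bregman setting."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual (Bregman) variable
    # The x-subproblem's Hessian A'A + rho*I is fixed across iterations.
    M = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)         # separable shrinkage
        u = u + x - z                                # dual update
    return z
```

The rho parameter plays the role of the AL penalty parameter discussed in the abstract: the iteration converges for any rho > 0, but the speed depends on its choice.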
Similar resources
A comparison of the computational performance of Iteratively Reweighted Least Squares and alternating minimization algorithms for ℓ1 inverse problems
Alternating minimization algorithms with a shrinkage step, derived within the Split Bregman (SB) or Alternating Direction Method of Multipliers (ADMM) frameworks, have become very popular for ℓ1-regularized problems, including Total Variation and Basis Pursuit Denoising. It appears to be generally assumed that they deliver much better computational performance than older methods such as Iterativ...
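For contrast with the shrinkage-based methods above, here is a minimal sketch (an illustrative assumption, not code from the compared paper) of Iteratively Reweighted Least Squares for the same ℓ1-regularized least-squares problem: the ℓ1 term is majorized each iteration by a weighted quadratic, so every step reduces to a linear solve.

```python
import numpy as np

def irls_l1(A, b, lam, n_iter=50, eps=1e-8):
    """Illustrative IRLS for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Each iteration replaces lam*|x_i| by the quadratic majorizer
    lam*x_i^2 / (2*|x_i^k|), giving a weighted ridge problem."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # plain least-squares start
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(n_iter):
        # Weights lam/|x_i^k|; eps guards against division by zero.
        W = np.diag(lam / (np.abs(x) + eps))
        x = np.linalg.solve(AtA + W, Atb)  # one linear solve per iteration
    return x
```

Each IRLS iteration costs a full linear solve with a changing system matrix, whereas the shrinkage step in SB/ADMM is closed-form; this is the trade-off the comparison above examines.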
Fast Splitting-Based Ordered-Subsets X-Ray CT Image Reconstruction
Using non-smooth regularization in X-ray computed tomography (CT) image reconstruction has become more popular these days due to the recent resurgence of the classic augmented Lagrangian (AL) methods in fields such as total-variation (TV) denoising and compressed sensing (CS). For example, undersampling projection views is one way to reduce radiation dose in CT scans; however, this causes strong...
A Least Squares Approach to Estimating the Average Reservoir Pressure
The least squares method (LSM) is an accurate and rapid method for solving some analytical and numerical problems. This method can be used to estimate the average reservoir pressure in well test analysis. In fact, it may be employed to estimate parameters such as permeability (k) and pore volume (Vp). In this regard, buildup, drawdown, late transient test data, the modified Muskat method, interfe...
Multisplitting for regularized least squares with Krylov subspace recycling
The method of multisplitting, implemented as a restricted additive Schwarz type algorithm, is extended for the solution of regularized least squares problems. The presented non-stationary version of the algorithm uses dynamic updating of the weights applied to the subdomains in reconstituting the global solution. Standard convergence results follow from extensive prior literature on linear mult...
Superlinearly convergent exact penalty projected structured Hessian updating schemes for constrained nonlinear least squares: asymptotic analysis
We present a structured algorithm for solving constrained nonlinear least squares problems, and establish its local two-step Q-superlinear convergence. The approach is based on an adaptive structured scheme due to Mahdavi-Amiri and Bartels of the exact penalty method of Coleman and Conn for nonlinearly constrained optimization problems. The structured adaptation also makes use of the ideas of N...
Journal: CoRR
Volume: abs/1402.4371
Pages: -
Publication date: 2014